
    Coding-theorem Like Behaviour and Emergence of the Universal Distribution from Resource-bounded Algorithmic Probability

    Previously referred to as 'miraculous' in the scientific literature because of its powerful properties and its wide application as an optimal solution to the problem of induction/inference, (approximations to) Algorithmic Probability (AP) and the associated Universal Distribution are (or should be) of the greatest importance in science. Here we investigate the emergence, the rates of emergence and convergence, and the Coding-theorem-like behaviour of AP in Turing-subuniversal models of computation. We investigate empirical distributions of computing models in the Chomsky hierarchy. We introduce measures of algorithmic probability and algorithmic complexity based upon resource-bounded computation, in contrast to the previously and thoroughly investigated distributions produced from the output of Turing machines. This approach allows for numerical approximations to algorithmic (Kolmogorov-Chaitin) complexity-based estimations at each level of the computational hierarchy. We demonstrate that all these estimations are correlated in rank and that they converge both in rank and in value as a function of computational power, despite fundamental differences between the computational models. In the context of natural processes that operate below the Turing-universal level because of finite resources and physical degradation, the investigation of natural biases stemming from algorithmic rules may shed light on the distribution of outcomes. We show that up to 60% of the simplicity/complexity bias in distributions produced even by the weakest of the computational models can be accounted for by Algorithmic Probability in its approximation to the Universal Distribution.
    Comment: 27 pages main text, 39 pages including supplement. Online complexity calculator: http://complexitycalculator.com
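    To make the coding-theorem-like relation concrete, the sketch below builds an empirical output distribution over a deliberately tiny resource-bounded model (an elementary cellular automaton run for a fixed number of steps, used here as a stand-in for the paper's Turing-subuniversal machines, not as their actual setup) and estimates complexity as K(s) ≈ -log2 D(s). The model, the program encoding, and all parameters are illustrative assumptions.

```python
from collections import Counter
from itertools import product
from math import log2

def eca_step(row, rule):
    """One step of an elementary cellular automaton with periodic boundary."""
    n = len(row)
    return tuple((rule >> (row[(i - 1) % n] * 4 + row[i] * 2 + row[(i + 1) % n])) & 1
                 for i in range(n))

def run_program(bits, steps=8):
    """Toy resource-bounded model: the first 8 bits select an ECA rule, the
    remaining bits form the initial row; the machine runs for a fixed number
    of steps and outputs the final row as a bit string."""
    rule = int("".join(map(str, bits[:8])), 2)
    row = tuple(bits[8:])
    for _ in range(steps):
        row = eca_step(row, rule)
    return "".join(map(str, row))

# Empirical output distribution over all programs of a fixed length, analogous
# in spirit to the resource-bounded distributions discussed above.
PROG_LEN = 12  # 8 rule bits + 4 initial-condition bits (kept tiny on purpose)
counts = Counter(run_program(p) for p in product((0, 1), repeat=PROG_LEN))
total = sum(counts.values())

def ctm_estimate(s):
    """Coding-theorem-like estimate: K(s) ~ -log2 D(s), where D is the
    empirical output probability under the bounded model."""
    return -log2(counts[s] / total) if s in counts else float("inf")

for s, c in counts.most_common(3):
    print(f"{s}  D={c/total:.4f}  K~{ctm_estimate(s):.2f} bits")
```

    Frequent outputs receive low complexity estimates, which is the simplicity bias the abstract quantifies for weaker models.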

    Economic viability of a plastics pyrolysis plant

    This work analyzes the economic viability of implementing a plastics pyrolysis plant based on a conical spouted bed reactor with a continuous feed capacity of 2500 kg/h. The study focuses on the treatment of the polyolefin family (high- and low-density polyethylene and polypropylene), these plastics being the main constituents of the waste plastics in municipal solid waste. On pyrolytic decomposition, these polyolefins produce different fractions: gas (C1-C4), light olefins (ethylene, propylene, butene), gasoline (C5-C11), diesel (C12-C20) and waxes (C21+).
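    As a rough illustration of how such a viability analysis begins, the sketch below sizes the product streams from the 2500 kg/h feed using purely hypothetical yield fractions; the study's actual yields and economics are not reproduced here.

```python
# Back-of-the-envelope mass balance for a 2500 kg/h polyolefin feed.
# The yield fractions and on-stream hours below are hypothetical placeholders,
# not figures from the study; they only show how the product streams are sized.
FEED_KG_H = 2500
HOURS_PER_YEAR = 8000  # assumed on-stream time

hypothetical_yields = {  # mass fractions, must sum to 1.0
    "gas (C1-C4)": 0.10,
    "light olefins": 0.25,
    "gasoline (C5-C11)": 0.30,
    "diesel (C12-C20)": 0.20,
    "waxes (C21+)": 0.15,
}

for name, y in hypothetical_yields.items():
    kg_h = FEED_KG_H * y
    t_year = kg_h * HOURS_PER_YEAR / 1000
    print(f"{name:20s} {kg_h:7.1f} kg/h  {t_year:9.0f} t/yr")
```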

    An Adaptive Computational Intelligence Approach to Personalised Health Assessment and Immune Age Characterisation from Common Haematological Markers

    We introduce an approach to building a simulated digital twin of an individual's health, based on an adaptive learning algorithm that learns the optimal reference values of the blood panel over time and assigns an immune age score; when the biological age is lower than this score, it provides an indication of wellness. The score may also be useful for classification purposes. We demonstrate its efficacy on real and synthetic data from medically relevant cases, extreme cases, and empirical blood cell count data from 100K records in the CDC NHANES survey, which spans 13 years, from 2003 to 2016. We find that the score we introduce is informative when distinguishing healthy individuals from those with disease, whether self-reported or manifested in abnormal blood tests, providing an entry-level score for patient triaging. We show that the score varies over time and is correlated with biological age, leading to the definition of an immune age as the inverse function of this relationship when the analytes are restricted to those obtained from a standard FBC or CBC test. This provides clinical evidence of the potential relevance of CBC results to precision medicine and personalised predictive healthcare.
    Comment: 30 pages + appendix
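    The sketch below is a minimal illustration of the general idea, not the authors' adaptive algorithm: CBC-style analytes are scored by their deviation from hypothetical reference intervals, a score-versus-age trend is fitted on a synthetic cohort (a placeholder for the NHANES data), and an "immune age" is obtained by inverting that trend. Reference ranges, the cohort, and the fitted trend are all fabricated for illustration.

```python
import numpy as np

REFERENCE = {  # analyte: (low, high); hypothetical adult reference intervals
    "WBC (10^3/uL)": (4.0, 11.0),
    "RBC (10^6/uL)": (4.2, 5.9),
    "Hemoglobin (g/dL)": (12.0, 17.5),
    "Platelets (10^3/uL)": (150, 400),
}

def panel_score(panel):
    """Mean normalised deviation of each analyte from the centre of its range."""
    devs = []
    for analyte, value in panel.items():
        lo, hi = REFERENCE[analyte]
        centre, half_width = (lo + hi) / 2, (hi - lo) / 2
        devs.append(abs(value - centre) / half_width)
    return float(np.mean(devs))

# Fit a linear score-vs-age trend on a synthetic cohort (stand-in for survey data).
rng = np.random.default_rng(0)
ages = rng.uniform(20, 80, 500)
scores = 0.4 + 0.005 * ages + rng.normal(0, 0.05, 500)  # fabricated trend, illustration only
slope, intercept = np.polyfit(ages, scores, 1)

def immune_age(panel):
    """Invert the fitted trend: the age at which the expected score equals this panel's score."""
    return (panel_score(panel) - intercept) / slope

example = {"WBC (10^3/uL)": 7.1, "RBC (10^6/uL)": 4.8,
           "Hemoglobin (g/dL)": 14.2, "Platelets (10^3/uL)": 260}
print(f"score = {panel_score(example):.3f}, immune age ~ {immune_age(example):.1f} years")
```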

    A decomposition method for global evaluation of Shannon entropy and local estimations of algorithmic complexity

    We investigate the properties of a Block Decomposition Method (BDM), which extends the power of a Coding Theorem Method (CTM) that approximates local estimations of algorithmic complexity based on Solomonoff–Levin's theory of algorithmic probability, providing a closer connection to algorithmic complexity than previous attempts based on statistical regularities such as popular lossless compression schemes. The strategy behind BDM is to find small computer programs that produce the components of a larger, decomposed object. The set of short computer programs can then be artfully arranged in sequence so as to produce the original object. We show that the method provides efficient estimations of algorithmic complexity but that it performs like Shannon entropy when it loses accuracy. We estimate errors and study the behaviour of BDM for different boundary conditions, all of which are compared and assessed in detail. The measure may be adapted for use with multidimensional objects other than strings, such as arrays and tensors. To test the measure we demonstrate the power of CTM on objects of low algorithmic randomness that are assigned maximal entropy (e.g., π) but whose numerical approximations are closer to the theoretical expectation of low algorithmic randomness. We also test the measure on larger objects, including dual, isomorphic and cospectral graphs, for which we know that algorithmic randomness is low. We also release implementations of the methods in most major programming languages—Wolfram Language (Mathematica), Matlab, R, Perl, Python, Pascal, C++, and Haskell—and an online algorithmic complexity calculator.
    Swedish Research Council (Vetenskapsrådet)
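    A minimal sketch of the BDM aggregation rule follows, assuming a block size of 4 and a hypothetical CTM lookup table (real CTM values come from the exhaustive machine enumerations released with the method): the complexity of a string is estimated as the sum, over its distinct blocks, of CTM(block) plus log2 of the block's multiplicity.

```python
from collections import Counter
from math import log2

def bdm(string, ctm_table, block_size=4):
    """Block Decomposition Method sketch: partition the string into blocks,
    look up a CTM complexity estimate for each distinct block, and aggregate
    as BDM = sum over distinct blocks of CTM(block) + log2(multiplicity)."""
    blocks = [string[i:i + block_size] for i in range(0, len(string), block_size)]
    return sum(ctm_table[b] + log2(n) for b, n in Counter(blocks).items())

# Hypothetical CTM values for a few 4-bit blocks (placeholders, not published values).
toy_ctm = {"0000": 3.0, "1111": 3.0, "0101": 3.6, "1010": 3.6, "0110": 4.2, "1001": 4.2}

print(bdm("0000" * 8, toy_ctm))                          # repeated block: CTM("0000") + log2(8) = 6.0
print(bdm("0101" + "1001" + "0110" + "0000", toy_ctm))   # four distinct blocks: 3.6+4.2+4.2+3.0 = 15.0
```

    The log2(multiplicity) term is what keeps BDM from over-charging regular objects: repeating a block adds only the cost of saying how many times it occurs, not its full complexity again.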

    Conversion of HDPE into Value Products by Fast Pyrolysis Using FCC Spent Catalysts in a Fountain Confined Conical Spouted Bed Reactor

    Continuous catalytic cracking of polyethylene over a spent fluid catalytic cracking (FCC) catalyst was studied in a conical spouted bed reactor (CSBR) with a fountain confiner and draft tube. The effect of temperature (475-600 °C) and space-time (7-45 g_cat min g_HDPE^-1) on product distribution was analyzed. The CSBR allows operating with a continuous plastic feed without defluidization problems and is especially suitable for catalytic pyrolysis with high catalyst efficiency. Thus, high catalyst activity was observed, with the wax yield being negligible above 550 °C. The main product fraction obtained in the catalytic cracking was made up of C5-C11 hydrocarbons, with olefins being the main components. However, its yield decreased as temperature and residence time were increased, owing to cracking, hydrogen transfer, cyclization, and aromatization reactions that lead to light hydrocarbons, paraffins, and aromatics. The proposed strategy is of great environmental relevance, as plastics are recycled using an industrial waste (spent FCC catalyst).
    This work was carried out with financial support from Spain's Ministries of Science, Innovation and Universities (RTI2018-101678-BI00 (MCIU/AEI/FEDER, UE) and RTI2018-098283-JI00 (MCIU/AEI/FEDER, UE)) and Science and Innovation (PID2019-107357RB-I00 (MCI/AEI/FEDER, UE)), the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 823745, and the Basque Government (IT1218-19 and KK-2020/00107).
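    As a small worked example of the space-time units quoted above (g_cat min g_HDPE^-1), the snippet below divides a catalyst inventory by an HDPE feed rate; both numbers are hypothetical placeholders, not values from the study.

```python
# space-time = catalyst inventory / plastic mass flow rate
catalyst_mass_g = 60.0   # hypothetical spent-FCC catalyst load in the bed
feed_rate_g_min = 2.0    # hypothetical continuous HDPE feed rate

space_time = catalyst_mass_g / feed_rate_g_min
print(f"space-time = {space_time:.1f} g_cat min g_HDPE^-1")  # 30.0, within the 7-45 range studied
```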